Jacobian-Free Three-Level Trust Region Method for Nonlinear Least Squares Problems

Authors

  • Wei Xu
  • Ning Zheng
  • Ken Hayami
Abstract

Nonlinear least squares (NLS) problems arise in many applications. Common solvers require computing and storing the corresponding Jacobian matrix explicitly, which is too expensive for large problems. In this paper, we propose an effective Jacobian-free method for large NLS problems, built on a novel combination of automatic differentiation for the products J(x)v and J(x)^T v with preconditioning ideas, neither of which requires forming the Jacobian matrix J(x) explicitly. Together they yield a new and effective three-level iterative approach. At the outer level, the dogleg/trust-region method is employed to solve the NLS problem. At the middle level, the linear least squares (LLS) problem generated at each step of the dogleg method is solved by an iterative LLS solver, CGLS or BA-GMRES. At the inner level, to accelerate the convergence of the iterative LLS solver, we propose an inner-iteration preconditioner based on the weighted Jacobi method. Compared with the common dogleg solver and the truncated Newton method, the proposed three-level method computes neither the gradient nor the Jacobian matrix explicitly and is efficient in both computational complexity and memory storage. Furthermore, the method does not rely on the sparsity or structure of the Jacobian, gradient, or Hessian matrix, so it can be applied to any large general NLS problem. Numerical experiments show that the proposed method substantially outperforms the common trust-region method and the truncated Newton method.
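The primitive behind the Jacobian-free design is that automatic differentiation evaluates the products J(x)v (forward mode) and J(x)^T w (reverse mode) without ever materializing J(x). A minimal sketch of these matrix-free products using JAX's jvp/vjp; the residual function and all names here are illustrative stand-ins, not code from the paper:

```python
import jax
import jax.numpy as jnp

def residual(x):
    # Illustrative NLS residual r: R^2 -> R^2 (a stand-in, not from the paper).
    return jnp.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def jac_vec(x, v):
    # Forward-mode AD: returns J(x) @ v without forming J(x).
    return jax.jvp(residual, (x,), (v,))[1]

def jacT_vec(x, w):
    # Reverse-mode AD: returns J(x).T @ w without forming J(x).
    _, pullback = jax.vjp(residual, x)
    return pullback(w)[0]

x = jnp.array([-1.2, 1.0])
print(jac_vec(x, jnp.array([1.0, 0.5])))   # J(x) v
print(jacT_vec(x, residual(x)))            # J(x)^T r(x), the gradient of 0.5 ||r||^2
```

Since the gradient of 0.5 ||r(x)||^2 is exactly J(x)^T r(x), the steepest-descent (Cauchy) direction needed by the dogleg step comes for free from the same reverse-mode product.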

Related articles

On Iterative Krylov-Dogleg Trust-Region Steps for Solving Neural Networks Nonlinear Least Squares Problems

This paper describes a method of dogleg trust-region steps, or restricted Levenberg-Marquardt steps, based on a projection process onto Krylov subspaces for neural-network nonlinear least squares problems. In particular, the linear conjugate gradient (CG) method works as the inner iterative algorithm for solving the linearized Gauss-Newton normal equation, whereas the outer nonlinear algor...
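A CG-type solver for the Gauss-Newton normal equation needs only the two matrix-free products sketched above. Below is a hedged sketch of plain CGLS (algebraically, CG applied to A^T A p = A^T b); the solvers in these papers add preconditioning, such as the weighted-Jacobi inner iteration of the main paper, which is omitted here:

```python
import jax.numpy as jnp

def cgls(matvec, rmatvec, b, n, tol=1e-10, maxiter=100):
    """Solve min_p ||A p - b||_2 given only the products A v (matvec) and
    A^T w (rmatvec); equivalent to CG on the normal equations A^T A p = A^T b."""
    p = jnp.zeros(n)
    s = b                        # LLS residual b - A p (p = 0 initially)
    g = rmatvec(s)               # normal-equations residual A^T s
    d = g
    gamma = jnp.vdot(g, g)
    for _ in range(maxiter):
        q = matvec(d)
        alpha = gamma / jnp.vdot(q, q)
        p = p + alpha * d
        s = s - alpha * q
        g = rmatvec(s)
        gamma_new = jnp.vdot(g, g)
        if jnp.sqrt(gamma_new) < tol:
            break
        d = g + (gamma_new / gamma) * d
        gamma = gamma_new
    return p

# Dense demo. For the Gauss-Newton step min_p ||J(x) p + r(x)|| one would pass
# matvec = lambda v: jac_vec(x, v), rmatvec = lambda w: jacT_vec(x, w),
# and b = -residual(x), using the AD helpers from the earlier sketch.
A = jnp.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
b = jnp.array([1.0, 2.0, 3.0])
print(cgls(lambda v: A @ v, lambda w: A.T @ w, b, n=2))
```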

Computing sparse Hessian and Jacobian approximations with optimal hereditary properties

In nonlinear optimization it is often important to estimate large sparse Hessian or Jacobian matrices, to be used for example in a trust-region method. We propose an algorithm for computing a matrix B with a given sparsity pattern from a bundle of the m most recent difference vectors Δ = [δ_{k−m+1}, …, δ_k] and Γ = [γ_{k−m+1}, …, γ_k], where B should approximately map Δ into Γ. In this paper B is chosen ...
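With the sparsity pattern fixed, this estimation decouples by rows: each row of B is a small dense least-squares fit of the corresponding row of Γ against the rows of Δ selected by that row's support. A sketch of that row-wise reading (illustrative only; the paper's actual algorithm additionally enforces hereditary and symmetry properties rather than a plain least-squares fit):

```python
import jax
import jax.numpy as jnp

def fit_sparse_map(Delta, Gamma, pattern):
    """Fit B with a fixed sparsity pattern so that B @ Delta ≈ Gamma.
    Delta, Gamma: (n, m) bundles of difference vectors (one per column);
    pattern: boolean (n, n) mask of allowed nonzeros in B."""
    n = Delta.shape[0]
    B = jnp.zeros((n, n))
    for i in range(n):
        support = jnp.nonzero(pattern[i])[0]
        # Row i of B solves min_b || Delta[support, :].T @ b - Gamma[i, :] ||_2.
        b_i, *_ = jnp.linalg.lstsq(Delta[support, :].T, Gamma[i, :])
        B = B.at[i, support].set(b_i)
    return B

# Demo: tridiagonal pattern, n = 4 unknowns, m = 3 difference pairs.
n, m = 4, 3
pattern = jnp.abs(jnp.arange(n)[:, None] - jnp.arange(n)[None, :]) <= 1
Delta = jax.random.normal(jax.random.PRNGKey(0), (n, m))
Gamma = jax.random.normal(jax.random.PRNGKey(1), (n, m))
print(fit_sparse_map(Delta, Gamma, pattern))
```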

A Regularized Gauss-Newton Trust Region Approach to Imaging in Diffuse Optical Tomography

We present a new algorithm for the solution of nonlinear least squares problems arising from parameterized imaging problems with diffuse optical tomographic data [D. Boas et al., IEEE Signal Process. Mag., 18 (2001), pp. 57–75]. The parameterization arises from the use of parametric level sets for regularization [M. E. Kilmer et al., Proc. SPIE, 5559 (2004), pp. 381–391], [A. Aghasi, M. E. Kil...

On the convergence of an inexact Gauss-Newton trust-region method for nonlinear least-squares problems with simple bounds

We introduce an inexact Gauss-Newton trust-region method for solving bound-constrained nonlinear least-squares problems where, at each iteration, a trust-region subproblem is approximately solved by the Conjugate Gradient method. Provided a suitable control on the accuracy to which the subproblems are solved, we prove that the method has global and fast asymptotic convergence properties.
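A common way to realize such an approximate CG solve is the Steihaug-Toint truncated CG iteration, which stops on a small residual, on negative curvature, or when the iterate reaches the trust-region boundary. The sketch below follows those standard rules and is not these authors' exact accuracy control; hess_vec stands for any Hessian(-approximation) product, e.g. v ↦ J(x)^T (J(x) v) assembled from the AD products sketched earlier:

```python
import jax.numpy as jnp

def steihaug_cg(hess_vec, grad, radius, tol=1e-8, maxiter=100):
    """Approximately minimize g^T p + 0.5 p^T H p subject to ||p|| <= radius,
    with H available only through hess_vec."""
    p = jnp.zeros_like(grad)
    r = grad                       # residual of H p + g = 0 at p = 0
    d = -r

    def to_boundary(p, d):
        # Largest tau >= 0 with ||p + tau d|| = radius (positive root).
        pd, dd, pp = jnp.vdot(p, d), jnp.vdot(d, d), jnp.vdot(p, p)
        tau = (-pd + jnp.sqrt(pd**2 + dd * (radius**2 - pp))) / dd
        return p + tau * d

    for _ in range(maxiter):
        Hd = hess_vec(d)
        dHd = jnp.vdot(d, Hd)
        if dHd <= 0.0:             # negative curvature: follow d to the boundary
            return to_boundary(p, d)
        alpha = jnp.vdot(r, r) / dHd
        if jnp.linalg.norm(p + alpha * d) >= radius:
            return to_boundary(p, d)
        p = p + alpha * d
        r_new = r + alpha * Hd
        if jnp.linalg.norm(r_new) < tol * jnp.linalg.norm(grad):
            return p
        d = -r_new + (jnp.vdot(r_new, r_new) / jnp.vdot(r, r)) * d
        r = r_new
    return p
```

For a Gauss-Newton model, dHd = ||J d||^2 is never negative, so the negative-curvature branch is only exercised with more general Hessian approximations.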

Superlinearly convergent exact penalty projected structured Hessian updating schemes for constrained nonlinear least squares: asymptotic analysis

We present a structured algorithm for solving constrained nonlinear least squares problems and establish its local two-step Q-superlinear convergence. The approach is based on an adaptive structured scheme, due to Mahdavi-Amiri and Bartels, for the exact penalty method of Coleman and Conn for nonlinearly constrained optimization problems. The structured adaptation also makes use of the ideas of N...

Publication date: 2014